We introduce the task of entity-centric query refinement. Given an input query whose answer is a (potentially large) set of entities, the task output is a small set of query refinements intended to help the user with effective domain exploration and entity discovery. We propose a method for creating a training dataset for this task. For a given input query, we use an existing knowledge-base taxonomy as the source of candidate query refinements, and select a final set using a search procedure that operates over the set of entities answering the input query. We demonstrate that our method identifies refinement sets that human annotators judge to be interesting, comprehensive, and non-redundant. Moreover, we find that a text generation model trained on the newly constructed dataset is able to provide refinements for novel queries that are not covered by existing taxonomies. Our code and data are available at https://github.com/google-research/language/tree/master/language/qresp.
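As a concrete illustration of the selection step, here is a minimal sketch in which the search procedure is instantiated as greedy coverage over the entities answering the input query. All names and the greedy strategy are hypothetical; this is not the released qresp code.

```python
# Sketch: pick a small set of taxonomy refinements that covers many answer
# entities with little overlap. Labels and entity ids are illustrative.

def select_refinements(answer_entities, candidates, k=5):
    """answer_entities: set of entity ids answering the input query.
    candidates: dict mapping refinement label -> set of entity ids it covers.
    Returns up to k refinement labels."""
    covered, chosen = set(), []
    for _ in range(k):
        # Score each remaining candidate by how many new entities it covers.
        best, best_gain = None, 0
        for label, entities in candidates.items():
            if label in chosen:
                continue
            gain = len((entities & answer_entities) - covered)
            if gain > best_gain:
                best, best_gain = label, gain
        if best is None:  # no remaining candidate adds coverage
            break
        chosen.append(best)
        covered |= candidates[best] & answer_entities
    return chosen

# Toy example: refinements of the query "science fiction authors".
entities = {"asimov", "le_guin", "lem", "butler", "clarke"}
taxonomy = {
    "American sci-fi authors": {"asimov", "butler"},
    "British sci-fi authors": {"clarke"},
    "Polish sci-fi authors": {"lem"},
    "Feminist sci-fi authors": {"le_guin", "butler"},
}
print(select_refinements(entities, taxonomy, k=3))
```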
Generic unstructured neural networks have been shown to struggle on out-of-distribution compositional generalization. Compositional data augmentation via example recombination has transferred some prior knowledge about compositionality to black-box neural models on several semantic parsing tasks, but this typically requires task-specific engineering or provides limited gains. We present a more powerful data recombination method using a model called the Compositional Structure Learner (CSL). CSL is a generative model with a quasi-synchronous context-free grammar backbone, which we induce from the training data. We sample recombined examples from CSL and add them to the fine-tuning data of a pre-trained sequence-to-sequence model (T5). This procedure effectively transfers most of CSL's compositional bias to T5 for diagnostic tasks, and results in a model stronger than a T5-CSL ensemble on two real-world compositional generalization tasks. This yields new state-of-the-art performance on these challenging semantic parsing tasks, which require generalization to natural language variation and to novel compositions of elements.
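To make the recombination idea concrete, here is a toy sketch assuming a trivial "grammar" that abstracts entity fillers and rewrites both sides of an (utterance, program) pair synchronously; the actual CSL induction is far more general than this.

```python
import random

# Toy grammar-based data recombination (not the actual CSL implementation):
# swap fillers consistently across utterance and program, mimicking a
# synchronous-grammar rewrite, then add the synthetic pairs to the
# fine-tuning data of a seq-to-seq model such as T5.

train = [
    ("book a flight to Boston", "book_flight(dest=Boston)"),
    ("book a flight to Denver", "book_flight(dest=Denver)"),
    ("what is the weather in Boston", "weather(loc=Boston)"),
]
cities = ["Boston", "Denver"]

def recombine(pairs, fillers, n=4):
    out = []
    for _ in range(n):
        utt, prog = random.choice(pairs)
        old = next((c for c in fillers if c in utt), None)
        if old is None:
            continue
        new = random.choice(fillers)
        # Substitute the same filler on both sides of the pair.
        out.append((utt.replace(old, new), prog.replace(old, new)))
    return out

augmented = train + recombine(train, cities)
for utt, prog in augmented:
    print(utt, "->", prog)
```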
Recent work on open domain question answering (QA) assumes strong supervision of the supporting evidence and/or assumes a black-box information retrieval (IR) system to retrieve evidence candidates. We argue that both are suboptimal, since gold evidence is not always available, and QA is fundamentally different from IR. We show for the first time that it is possible to jointly learn the retriever and reader from question-answer string pairs and without any IR system. In this setting, evidence retrieval from all of Wikipedia is treated as a latent variable. Since this is impractical to learn from scratch, we pre-train the retriever with an Inverse Cloze Task. We evaluate on open versions of five QA datasets. On datasets where the questioner already knows the answer, a traditional IR system such as BM25 is sufficient. On datasets where a user is genuinely seeking an answer, we show that learned retrieval is crucial, outperforming BM25 by up to 19 points in exact match.
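The Inverse Cloze Task pre-training can be illustrated with a small sketch: a sentence sampled from a passage acts as a pseudo-query, and the retriever learns to find the passage it came from. The sampling details below are assumptions, not the paper's exact recipe.

```python
import random

# Sketch of Inverse Cloze Task (ICT) example construction: the sampled
# sentence is the pseudo-query, the rest of the passage is the evidence.

def make_ict_example(passage_sentences, keep_sentence_prob=0.1):
    idx = random.randrange(len(passage_sentences))
    query = passage_sentences[idx]
    # Occasionally leave the sentence in its context so the model also
    # learns lexical matching, not just topical matching.
    if random.random() < keep_sentence_prob:
        context = passage_sentences
    else:
        context = passage_sentences[:idx] + passage_sentences[idx + 1:]
    return query, " ".join(context)

passage = [
    "Zebras have four gaits: walk, trot, canter and gallop.",
    "They are generally slower than horses.",
    "A zebra's great stamina helps it outpace predators.",
]
q, ctx = make_ict_example(passage)
print("pseudo-query:", q)
print("context:", ctx)
```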
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
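A minimal sketch of the "one additional output layer" recipe, using Hugging Face transformers checkpoint names as an assumption (the paper predates this library, so treat the code as illustrative): the pre-trained encoder is kept intact and a single linear classifier is added on top of the [CLS] representation.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")
classifier = torch.nn.Linear(encoder.config.hidden_size, 2)  # e.g. 2 classes

inputs = tokenizer("The movie was great!", return_tensors="pt")
outputs = encoder(**inputs)
cls_repr = outputs.last_hidden_state[:, 0]  # [CLS] token representation
logits = classifier(cls_repr)

# During fine-tuning, both the encoder and the new layer are updated.
loss = torch.nn.functional.cross_entropy(logits, torch.tensor([1]))
loss.backward()
```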
Emotions play an important role in interpersonal interactions and social conflict, yet their function in the development of controversy and disagreement in online conversations has not been explored. To address this gap, we study controversy on Reddit, a popular network of online discussion forums. We collect discussions from a wide variety of topical forums and use emotion detection to recognize a range of emotions from text, including anger, fear, joy, admiration, etc. Our study has three main findings. First, controversial comments express more anger and less admiration, joy and optimism than non-controversial comments. Second, controversial comments affect emotions of downstream comments in a discussion, usually resulting in a long-term increase in anger and a decrease in positive emotions, although the magnitude and direction of emotional change depends on the forum. Finally, we show that emotions help better predict which comments will become controversial. Understanding emotional dynamics of online discussions can help communities to better manage conversations.
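The third finding suggests a simple modeling setup, sketched below with placeholder emotion scores and a logistic-regression classifier; the paper's actual feature set and model may differ.

```python
from sklearn.linear_model import LogisticRegression

# Each row: [anger, fear, joy, admiration, optimism] scores in [0, 1] from
# an emotion detector. Values and labels are fabricated placeholders.
X = [
    [0.8, 0.2, 0.1, 0.0, 0.1],  # angry comment
    [0.1, 0.1, 0.7, 0.6, 0.5],  # upbeat comment
    [0.7, 0.3, 0.0, 0.1, 0.0],
    [0.0, 0.0, 0.8, 0.7, 0.6],
]
y = [1, 0, 1, 0]  # 1 = controversial, 0 = not

clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.9, 0.1, 0.0, 0.0, 0.0]]))  # likely controversial
```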
The demonstrated success of transfer learning has popularized approaches that involve pretraining models from massive data sources and subsequent finetuning towards a specific task. While such approaches have become the norm in fields such as natural language processing, implementation and evaluation of transfer learning approaches for chemistry are in the early stages. In this work, we demonstrate finetuning for downstream tasks on a graph neural network (GNN) trained over a molecular database containing 2.7 million water clusters. The use of Graphcore IPUs as an AI accelerator for training molecular GNNs reduces training time from a reported 2.7 days on 0.5M clusters to 1.2 hours on 2.7M clusters. Finetuning the pretrained model for downstream tasks of molecular dynamics and transfer to a different potential energy surface took only 8.3 hours and 28 minutes, respectively, on a single GPU.
In a scenario with multiple persons talking simultaneously, the spatial characteristics of the signals are the most distinct feature for extracting the target signal. In this work, we develop a deep joint spatial-spectral non-linear filter that can be steered in an arbitrary target direction. For this we propose a simple and effective conditioning mechanism, which sets the initial state of the filter's recurrent layers based on the target direction. We show that this scheme is more effective than the baseline approach and increases the flexibility of the filter at no performance cost. The resulting spatially selective non-linear filters can also be used for speech separation of an arbitrary number of speakers and enable very accurate multi-speaker localization as we demonstrate in this paper.
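A minimal sketch of such a conditioning mechanism, with illustrative shapes and layer sizes: a small linear layer maps the target direction to the initial hidden and cell states of the filter's recurrent layer, steering the filter toward that direction without extra per-frame inputs.

```python
import torch
import torch.nn as nn

class SteerableFilter(nn.Module):
    def __init__(self, feat_dim=257, hidden=256):
        super().__init__()
        # Map a direction angle (as cos/sin) to the LSTM's initial states.
        self.cond = nn.Linear(2, 2 * hidden)
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, feat_dim)  # e.g. a spectral mask

    def forward(self, spectra, direction_rad):
        cond = torch.stack([direction_rad.cos(), direction_rad.sin()], dim=-1)
        h0, c0 = self.cond(cond).chunk(2, dim=-1)
        state = (h0.unsqueeze(0).contiguous(), c0.unsqueeze(0).contiguous())
        hidden_seq, _ = self.lstm(spectra, state)
        return torch.sigmoid(self.out(hidden_seq))

model = SteerableFilter()
spectra = torch.randn(4, 100, 257)              # (batch, frames, features)
direction = torch.tensor([0.0, 0.5, 1.0, 2.0])  # target direction in radians
mask = model(spectra, direction)
print(mask.shape)  # torch.Size([4, 100, 257])
```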
The key advantage of using multiple microphones for speech enhancement is that spatial filtering can be used to complement tempo-spectral processing. In a traditional setting, linear spatial filtering (beamforming) and single-channel post-filtering are commonly performed separately. In contrast, there is a trend towards employing deep neural networks (DNNs) to learn a joint spatial and tempo-spectral non-linear filter, which means that the restriction to a linear processing model and the separate processing of spatial and tempo-spectral information can potentially be overcome. However, the internal mechanisms that lead such data-driven filters to good multi-channel speech enhancement performance are not well understood. Therefore, in this work, we analyse the properties of a non-linear spatial filter realized by a DNN, as well as its interdependency with temporal and spectral processing, by carefully controlling the information sources (spatial, spectral, and temporal) available to the network. We confirm the superiority of a non-linear spatial processing model, which outperforms an oracle linear spatial filter in a challenging speaker extraction scenario by 0.24 POLQA score for a low number of microphones. Our analyses reveal that spectral information in particular should be processed jointly with spatial information, as this increases the spatial selectivity of the filter. Our systematic evaluation then leads to a simple network architecture that outperforms state-of-the-art network architectures on a speaker extraction task by 0.22 POLQA score, and by 0.32 POLQA score on the CHiME3 data.
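The notion of controlling the network's information sources can be sketched as simple input reshaping, shown below with assumed axis conventions (not the paper's exact setup): folding the frequency or time axis into the batch axis hides cross-frequency or cross-frame context from the model.

```python
import torch

x = torch.randn(2, 6, 100, 257)  # (batch, mics, frames, freq bins)
b, m, t, f = x.shape

# Joint input: the network sees all mics, frames and frequency bins at once.
joint = x

# No cross-frequency (spectral) context: each bin becomes its own example.
narrowband = x.permute(0, 3, 1, 2).reshape(b * f, m, t, 1)

# No cross-frame (temporal) context: each frame becomes its own example.
framewise = x.permute(0, 2, 1, 3).reshape(b * t, m, 1, f)

print(joint.shape, narrowband.shape, framewise.shape)
```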
Employing deep neural networks (DNNs) to directly learn filters for multi-channel speech enhancement has potentially two key advantages over a traditional approach that combines a linear spatial filter with an independent tempo-spectral post-filter: 1) non-linear spatial filtering can overcome potential restrictions originating from a linear processing model, and 2) joint processing of spatial and tempo-spectral information can exploit interdependencies between the different sources of information. Various DNN-based non-linear filters have been proposed recently, with good enhancement performance reported, but little is known about their internal mechanisms, which turns network architecture design into a game of chance. Therefore, in this paper, we perform experiments to better understand the internal processing of spatial, spectral, and temporal information by DNN-based non-linear filters. On the one hand, our experiments in a difficult speech extraction scenario confirm the importance of non-linear spatial filtering, which outperforms an oracle linear spatial filter by 0.24 POLQA score. On the other hand, we demonstrate that joint processing leads to a large performance gap of 0.4 POLQA score between network architectures that exploit spectral versus temporal information in addition to spatial information.
Existing domain adaptation (DA) algorithms train a target model and then use it to classify all samples in the target dataset. While this approach tries to address the problem that the source and target data come from different distributions, it fails to recognize the possibility that, within the target domain, some samples may be closer to the distribution of the source domain than to that of the target domain. In this paper, we develop a novel DA algorithm, Forced Transfer, that handles this situation. A straightforward yet effective idea for resolving this dilemma is to use an out-of-distribution detection algorithm to decide, at test time, whether a given sample is closer to the source domain, to the target domain, or to neither. In the first case, the sample is given to a machine learning classifier trained on source samples. In the second case, it is given to a classifier trained on target samples. In the third case, the sample is discarded, since neither the ML model trained on the source nor the one trained on the target is suited to classify it. It is well known that the first few layers of a neural network extract low-level features, so samples can be assigned to the three cases based on their activations after an empirically determined layer. Forced Transfer implements this idea. On three types of DA tasks, it outperforms the state-of-the-art algorithms we compare against.
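A minimal sketch of this routing logic, with an illustrative normalized-distance OOD score standing in for the paper's empirically chosen layer and detector:

```python
import numpy as np

def route(activation, source_stats, target_stats, threshold):
    """activation: feature vector taken after the chosen early layer.
    *_stats: (mean, std) of that layer's activations per domain."""
    def distance(stats):
        mean, std = stats
        return np.linalg.norm((activation - mean) / std)  # normalized distance

    d_src, d_tgt = distance(source_stats), distance(target_stats)
    if min(d_src, d_tgt) > threshold:
        return "discard"  # out-of-distribution for both domains
    return "source_classifier" if d_src < d_tgt else "target_classifier"

rng = np.random.default_rng(0)
src = (np.zeros(8), np.ones(8))      # toy source-domain stats
tgt = (np.full(8, 3.0), np.ones(8))  # toy target-domain stats
print(route(rng.normal(0, 1, 8), src, tgt, threshold=6.0))   # near source
print(route(rng.normal(10, 1, 8), src, tgt, threshold=6.0))  # neither -> discard
```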